836 research outputs found

    On the Ambartzumian-Pleijel identity in hyperbolic geometry

    We describe a hyperbolic version of the Ambartzumian-Pleijel identity. We use this identity to prove the hyperbolic Crofton formula and the hyperbolic isoperimetric inequality. The identity also provides a way to compute the chord length distribution for an ideal polygon in the hyperbolic plane. Analogous results for a maximally symmetric, simply connected, 2-dimensional Riemannian manifold with constant sectional curvature are given at the end. Comment: 21 pages, 11 figures
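
    For context, here are standard statements of the two classical results mentioned above, written for the hyperbolic plane of curvature -1; these are well-known integral-geometry facts and are not quoted from the paper itself.

        % Hyperbolic Crofton formula: n_\gamma(\ell) is the number of intersections
        % of a rectifiable curve \gamma with the geodesic \ell, and d\mu is the
        % isometry-invariant measure on the space of geodesics, suitably normalized.
        \operatorname{length}(\gamma) \;=\; \frac{1}{4}\int n_{\gamma}(\ell)\, d\mu(\ell)

        % Hyperbolic isoperimetric inequality (curvature -1): a simple closed curve
        % of length L enclosing area A satisfies the bound below, with equality
        % exactly for geodesic circles.
        L^{2} \;\ge\; 4\pi A + A^{2}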

    Object-level dynamic SLAM

    Visual Simultaneous Localisation and Mapping (SLAM) can estimate a camera's pose in an unknown environment and reconstruct an online map of it. Despite the advances in many real-time dense SLAM systems, most still assume a static environment, which is not a valid assumption in many real-world scenarios. This thesis aims to enable dense visual SLAM to run robustly in a dynamic environment, knowing where the sensor is in the environment and, just as importantly, what and where the surrounding objects are, for better scene understanding. The contributions of this thesis are threefold. The first presents one of the first object-level dynamic SLAM systems, which robustly tracks the camera pose while detecting, tracking, and reconstructing all the objects in dynamic scenes. It can continuously fuse geometric, semantic, and motion information for each object into an octree-based volumetric representation. One of the challenges in tracking moving objects is that object motion can easily break the illumination-constancy assumption. In our second contribution, we address this issue by proposing a dense feature-metric alignment to robustly estimate camera and object poses. We show how to learn dense feature maps and feature-metric uncertainties in a self-supervised way. Together they form a probabilistic feature-metric residual, which can be efficiently solved using Gauss-Newton optimisation and easily coupled with other residuals. So far, we can only reconstruct objects' geometry from the sensor data. Our third contribution further incorporates a category-level shape prior into the object mapping. Conditioned on the depth measurements, the learned implicit function completes the unseen part while reconstructing the observed part accurately, yielding better reconstruction completeness and more accurate object pose estimation. These three contributions advance the state of the art in visual SLAM. We hope such object-level dynamic SLAM systems will help robots intelligently interact with the world humans inhabit. Open Access
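
    As a rough illustration of the probabilistic feature-metric alignment in the second contribution, the following is a minimal NumPy sketch of an uncertainty-weighted Gauss-Newton update; the residuals, Jacobians, and per-residual uncertainties are random placeholders rather than the thesis's learned feature maps.

        import numpy as np

        def gauss_newton_step(r, J, sigma):
            """One probabilistic (uncertainty-weighted) Gauss-Newton update.

            r     : (N,) residual vector, e.g. feature-metric differences
            J     : (N, 6) Jacobian of the residuals w.r.t. the 6-DoF pose increment
            sigma : (N,) per-residual standard deviations (predicted uncertainties)
            """
            w = 1.0 / sigma**2                      # information weights
            H = J.T @ (J * w[:, None])              # weighted normal-equation matrix
            b = J.T @ (w * r)
            return np.linalg.solve(H, -b)           # pose increment in se(3) coordinates

        # toy example with random placeholder values
        rng = np.random.default_rng(0)
        r, J, sigma = rng.normal(size=100), rng.normal(size=(100, 6)), np.full(100, 0.5)
        print(gauss_newton_step(r, J, sigma))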

    Nonlinear trend removal should be carefully performed in heart rate variability analysis

    • Background: In heart rate variability analysis, the RR time series often suffer from aperiodic non-stationarity, the presence of ectopic beats, and similar disturbances, making it hard to extract useful information from the original signals.
    • Problem: Trend removal methods are commonly applied to reduce the influence of low-frequency and aperiodic non-stationary components in RR data. Unfortunately, this can alter the signal and make the analysis of detrended data less appropriate.
    • Objective: Investigate the effect of (linear and nonlinear) detrending on temporal and nonlinear analysis of heart rate variability in long-term RR data (normal sinus rhythm, atrial fibrillation, congestive heart failure, and ventricular premature arrhythmia conditions).
    • Methods: Temporal method: the standard measure SDNN. Nonlinear methods: multi-scale Fractal Dimension (FD), Detrended Fluctuation Analysis (DFA), and Sample Entropy (SampEn).
    • Results: Linear detrending has little effect on the global characteristics of the RR data, in either temporal or nonlinear complexity analysis. After linear detrending, the SDNNs are only slightly shifted and all distributions are well preserved; the cross-scale complexity remains almost the same as, or strongly correlated with, that of the original RR data. Nonlinear detrending changes not only the SDNN distribution but also the ordering among the different types of RR data: after this processing, the SDNN of normal sinus rhythm becomes indistinguishable from that of ventricular premature beats. Each type of RR data has its own complexity signature, yet nonlinear detrending makes all RR data similar in terms of complexity, so they can no longer be distinguished. The FD shows that nonlinearly detrended RR data have a dimension close to 2, the DFA exponent is close to zero, and SampEn is larger than 1.5 -- complexity values very close to those of a random signal.
    • Conclusions: Pre-processing by linear detrending can be applied to RR data with little influence on the subsequent analysis. Nonlinear detrending can be harmful and is not advisable as a pre-processing step; exceptions exist, but only in combination with other appropriate techniques that avoid completely changing the signal's intrinsic dynamics.
    • Keywords: heart rate variability • linear / nonlinear detrending • complexity analysis • multiscale analysis • detrended fluctuation analysis • fractal dimension • sample entropy
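
    As a small illustration of the quantities discussed above, here is a minimal Python/NumPy sketch of SDNN and of linear detrending of an RR series; the drifting toy series is made up and nothing here reproduces the paper's actual pipeline.

        import numpy as np

        def sdnn(rr_ms):
            """SDNN: standard deviation of the RR (NN) intervals, in milliseconds."""
            return np.std(np.asarray(rr_ms, dtype=float), ddof=1)

        def linear_detrend(rr_ms):
            """Remove a least-squares linear trend from the RR series, keeping the mean level."""
            rr = np.asarray(rr_ms, dtype=float)
            t = np.arange(rr.size)
            slope, intercept = np.polyfit(t, rr, 1)
            return rr - (slope * t + intercept) + rr.mean()

        # toy example: a slowly drifting RR series around 800 ms
        rng = np.random.default_rng(1)
        rr = 800 + 0.05 * np.arange(2000) + rng.normal(0, 30, size=2000)
        print(sdnn(rr), sdnn(linear_detrend(rr)))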

    A General Maximum Progression Model to Concurrently Synchronize Left-Turn and through Traffic Flows on an Arterial

    In existing bandwidth-based methods, through traffic flows are treated as the coordination objects and are offered progression bands accordingly. However, at certain times or nodes in the road network, when the left-turn traffic flows have a higher priority than the through traffic flows, it is inappropriate to still provide the progression bands to the through traffic flows; the left-turn traffic flows should instead be treated as the coordination objects to potentially achieve better control. Considering this, a general maximum progression model to concurrently synchronize left-turn and through traffic flows is established using a time-space diagram. The general model can deal with all patterns of the left-turn phases by introducing two new binary variables into the constraints; that is, these variables allow all left-turn phase patterns to be handled within a single formulation. Using the measures of effectiveness (average delay time, average vehicle stops, and average travel time) obtained from the traffic simulation software VISSIM, the validity of the general model is verified. The results show that, compared with MULTIBAND, the proposed general model can effectively reduce delay time, vehicle stops, and travel time and thus achieve better traffic control.
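
    For orientation, the following is a heavily simplified PuLP sketch of the one-way bandwidth-maximization skeleton that such progression models build on; it omits the paper's two-way coordination and the binary variables that handle the left-turn phase patterns, and the cycle length, green times, and travel times are made-up values.

        import pulp

        C = 100.0                       # common cycle length (s), assumed
        g = [45.0, 40.0, 50.0]          # coordinated-phase green times (s), assumed
        t = [0.0, 28.0, 55.0]           # cumulative travel times from signal 0 (s), assumed

        prob = pulp.LpProblem("max_progression_band", pulp.LpMaximize)
        b = pulp.LpVariable("bandwidth", lowBound=0)
        theta = [pulp.LpVariable(f"offset_{i}", lowBound=0, upBound=C) for i in range(3)]
        w = [pulp.LpVariable(f"w_{i}", lowBound=0) for i in range(3)]     # band start within green
        n = [pulp.LpVariable(f"n_{i}", cat="Integer") for i in range(3)]  # cycle-wrap integers

        prob += b                                        # maximize the band width
        for i in range(3):
            prob += w[i] + b <= g[i]                     # band must fit inside green i
            # leading band edge leaving signal 0 reaches signal i within its green, modulo the cycle
            prob += theta[0] + w[0] + t[i] == theta[i] + w[i] + n[i] * C

        prob.solve()
        print("max bandwidth:", pulp.value(b))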

    Carrier graphs for representations of the rank two free group into isometries of hyperbolic three space

    Carrier graphs were first introduced for closed hyperbolic 3-manifolds by White. In this paper, we first generalize this definition to carrier graphs for representations of a rank two free group into the isometry group of hyperbolic three space. We then prove the existence and finiteness of minimal carrier graphs for those representations that are discrete, faithful, and geometrically finite, and, more generally, for those that satisfy certain finiteness conditions first introduced by Bowditch. Comment: 50 pages, 32 figures

    LogoNet: a fine-grained network for instance-level logo sketch retrieval

    Sketch-based image retrieval, which aims to use sketches as queries to retrieve images containing the same query instance, has received increasing attention in recent years. Although dramatic progress has been made in sketch retrieval, few efforts have been devoted to logo sketch retrieval, which is still hindered by the following challenges. Firstly, logo sketch retrieval is more difficult than the typical sketch retrieval problem, since a logo sketch usually contains much less visual content, with only irregular strokes and lines. Secondly, instance-specific sketches exhibit dramatic appearance variations, making them less identifiable when querying the same logo instance. Thirdly, although several sketch retrieval benchmark datasets exist, no instance-level logo sketch dataset is publicly available. To address these limitations, we make two contributions in this study for instance-level logo sketch retrieval. First, we construct an instance-level logo sketch dataset containing 2k logo instances and more than 9k sketches. To our knowledge, this is the first publicly available instance-level logo sketch dataset. Second, we develop a fine-grained triple-branch CNN architecture based on a hybrid attention mechanism, termed LogoNet, for accurate logo sketch retrieval. More specifically, we embed the hybrid attention mechanism into the triple-branch architecture to capture the key query-specific information from the limited visual cues in logo sketches. Experimental evaluations on both our assembled dataset and public benchmark datasets demonstrate the effectiveness of the proposed network.
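
    As an illustration of what a hybrid (channel plus spatial) attention block can look like, here is a minimal CBAM-style PyTorch sketch; LogoNet's actual hybrid attention design and triple-branch layout are not specified in the abstract, so the structure and layer sizes below are assumptions.

        import torch
        import torch.nn as nn

        class HybridAttention(nn.Module):
            """Channel attention followed by spatial attention (CBAM-style sketch)."""
            def __init__(self, channels, reduction=8):
                super().__init__()
                self.channel_mlp = nn.Sequential(
                    nn.Linear(channels, channels // reduction),
                    nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels),
                )
                self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

            def forward(self, x):                            # x: (B, C, H, W)
                bsz, c, _, _ = x.shape
                avg, mx = x.mean(dim=(2, 3)), x.amax(dim=(2, 3))
                ca = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
                x = x * ca.view(bsz, c, 1, 1)                # reweight channels
                sa = torch.sigmoid(self.spatial_conv(torch.cat(
                    [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)))
                return x * sa                                # reweight spatial locations

        print(HybridAttention(64)(torch.randn(2, 64, 32, 32)).shape)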

    Accurate and Interactive Visual-Inertial Sensor Calibration with Next-Best-View and Next-Best-Trajectory Suggestion

    Visual-Inertial (VI) sensors are popular in robotics, self-driving vehicles, and augmented and virtual reality applications. In order to use them for any computer vision or state-estimation task, a good calibration is essential. However, collecting informative calibration data that renders the calibration parameters observable is not trivial for a non-expert. In this work, we introduce a novel VI calibration pipeline that uses a graphical user interface and information theory to guide a non-expert in collecting informative calibration data, providing Next-Best-View and Next-Best-Trajectory suggestions to calibrate the intrinsics, extrinsics, and temporal misalignment of a VI sensor. We show through experiments that our method is faster, more accurate, and more consistent than state-of-the-art alternatives. Specifically, we show that calibrations obtained with our method achieve more accurate estimation results when used by state-of-the-art VI odometry as well as VI-SLAM approaches. The source code of our software can be found at https://github.com/chutsu/yac. Comment: 8 pages, 11 figures, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)
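
    As a rough sketch of the information-theoretic view selection idea (not the actual yac implementation), the following NumPy snippet scores candidate views or trajectories by the expected reduction in the Gaussian entropy of the calibration parameters and picks the best one; all matrices here are random placeholders.

        import numpy as np

        def info_gain(H_prior, J_candidate, meas_cov):
            """Expected information gain (nats) of a candidate view/trajectory.

            H_prior     : (P, P) current Fisher information of the calibration parameters
            J_candidate : (M, P) predicted measurement Jacobian for the candidate
            meas_cov    : (M, M) measurement noise covariance
            """
            H_cand = J_candidate.T @ np.linalg.solve(meas_cov, J_candidate)
            _, logdet_post = np.linalg.slogdet(H_prior + H_cand)
            _, logdet_prior = np.linalg.slogdet(H_prior)
            return 0.5 * (logdet_post - logdet_prior)     # entropy reduction of a Gaussian

        # toy example: pick the best of three random candidate views
        rng = np.random.default_rng(2)
        H0 = np.eye(10)
        candidates = [rng.normal(size=(20, 10)) for _ in range(3)]
        gains = [info_gain(H0, J, 0.1 * np.eye(20)) for J in candidates]
        print("next best view:", int(np.argmax(gains)), gains)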

    CNN Injected Transformer for Image Exposure Correction

    Capturing images with incorrect exposure settings fails to deliver a satisfactory visual experience; only when the exposure is properly set can the color and details of an image be appropriately preserved. Previous exposure correction methods based on convolutions often produce exposure deviations in images as a consequence of the restricted receptive field of convolutional kernels, which cannot accurately capture long-range dependencies in images. To overcome this challenge, we apply a Transformer to the exposure correction problem, leveraging its capability for modeling long-range dependencies to capture a global representation. However, solely relying on a window-based Transformer leads to visually disturbing blocking artifacts due to the application of self-attention within small patches. In this paper, we propose a CNN Injected Transformer (CIT) to harness the individual strengths of CNNs and Transformers simultaneously. Specifically, we construct the CIT by utilizing a window-based Transformer to exploit the long-range interactions among different regions of the entire image. Within each CIT block, we incorporate a channel attention block (CAB) and a half-instance normalization block (HINB) to help the window-based self-attention acquire global statistics and refine local features. In addition to the hybrid architecture design for exposure correction, we apply a set of carefully formulated loss functions to improve spatial coherence and rectify potential color deviations. Extensive experiments demonstrate that our image exposure correction method outperforms state-of-the-art approaches in terms of both quantitative and qualitative metrics.
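
    For illustration, here is a minimal PyTorch sketch of a half-instance normalization block in the spirit of HINet, which normalizes only half of the feature channels; the abstract does not give CIT's exact CAB/HINB designs, so the structure below is an assumption.

        import torch
        import torch.nn as nn

        class HalfInstanceNorm(nn.Module):
            """Instance-normalize half of the channels, keep the rest untouched,
            then fuse the two halves with a 1x1 convolution."""
            def __init__(self, channels):
                super().__init__()
                assert channels % 2 == 0
                self.norm = nn.InstanceNorm2d(channels // 2, affine=True)
                self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

            def forward(self, x):                        # x: (B, C, H, W)
                a, b = torch.chunk(x, 2, dim=1)          # split channels in half
                return self.fuse(torch.cat([self.norm(a), b], dim=1))

        print(HalfInstanceNorm(64)(torch.randn(1, 64, 48, 48)).shape)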

    Strategic protection of landslide vulnerable mountains for biodiversity conservation under land-cover and climate change impacts

    Natural disasters impose huge uncertainty and losses on human lives and economic activities. Landslides are one such disaster and have become more prevalent because of anthropogenic disturbances, such as land-cover change, land degradation, and the expansion of infrastructure. These are further exacerbated by more extreme precipitation due to climate change, which is predicted to trigger more landslides and threaten sustainable development in vulnerable regions. Although biodiversity conservation and development are often regarded as having a trade-off relationship, here we present a global analysis of the areas with co-benefits, where conservation through expanding protection and reducing deforestation can not only benefit biodiversity but also reduce landslide risks to human society. High overlap exists between landslide susceptibility and areas of endemism for mammals, birds, and amphibians, which are mostly concentrated in mountain regions. We identified 247 mountain ranges as areas of high vulnerability, having both exceptional biodiversity and high landslide risk, accounting for 25.8% of the global mountainous area. Another 31 biodiverse mountains are classified as future vulnerable mountains, as they face increasing landslide risks because of predicted climate change and deforestation. None of these 278 mountains reaches the Aichi Target 11 of 17% coverage by protected areas. Of the 278 mountains, 52 need immediate action because of high vulnerability, severe threats from future deforestation and precipitation extremes, low protection, and high population density and anthropogenic activity. These actions include protected area expansion, forest conservation, and restoration where they could be a cost-effective way to reduce the risks of landslides.

    Under-Display Camera Image Restoration with Scattering Effect

    The under-display camera (UDC) provides consumers with a full-screen visual experience without any obstruction due to notches or punched holes. However, the semi-transparent nature of the display inevitably introduces severe degradation into UDC images. In this work, we address the UDC image restoration problem with specific consideration of the scattering effect caused by the display. We explicitly model the scattering effect by treating the display as a piece of homogeneous scattering medium. With the physical model of the scattering effect, we improve the image formation pipeline for image synthesis to construct a realistic UDC dataset with ground truth. To suppress the scattering effect for the eventual UDC image recovery, a two-branch restoration network is designed. More specifically, the scattering branch leverages the global modeling capability of channel-wise self-attention to estimate the parameters of the scattering effect from degraded images, while the image branch exploits the local representation advantage of CNNs to recover clear scenes, implicitly guided by the scattering branch. Extensive experiments are conducted on both real-world and synthesized data, demonstrating the superiority of the proposed method over state-of-the-art UDC restoration techniques. The source code and dataset are available at \url{https://github.com/NamecantbeNULL/SRUDC}. Comment: Accepted to ICCV 2023
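
    As a rough illustration of treating the display as a homogeneous scattering medium, here is a minimal NumPy sketch of the standard haze-style formation model observed = clean * t + ambient * (1 - t) with t = exp(-beta * depth); the paper's actual formation pipeline, point-spread-function handling, and parameter choices are not reproduced, so beta, depth, and the ambient term below are illustrative assumptions.

        import numpy as np

        def scattering_degrade(clean, beta=0.8, depth=1.0, ambient=0.6):
            """Homogeneous-scattering (haze-style) image formation sketch.

            clean   : (H, W, 3) image in [0, 1]
            beta    : scattering coefficient of the medium (illustrative value)
            depth   : effective optical path length through the medium
            ambient : intensity of the scattered ambient light
            """
            t = np.exp(-beta * depth)                    # transmission of the medium
            return np.clip(clean * t + ambient * (1.0 - t), 0.0, 1.0)

        # toy example on a random "clean" image
        clean = np.random.default_rng(3).random((64, 64, 3))
        print(scattering_degrade(clean).mean())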